Chest sounds recorded with a stethoscope offer an opportunity for remote cardiorespiratory health monitoring of neonates. However, reliable monitoring requires high-quality heart and lung sounds. This paper presents novel non-negative matrix factorisation (NMF) and non-negative matrix co-factorisation (NMCF) methods for neonatal chest sound separation. To assess these methods and compare them with existing single-source separation methods, an artificial mixture dataset of heart, lung, and noise sounds was generated, and signal-to-noise ratios were then computed for these artificial mixtures. The methods were also tested on real-world noisy neonatal chest sounds and assessed based on vital-sign estimation error and a 1-5 signal quality score developed in our previous works. Additionally, the computational cost of all methods was evaluated to determine suitability for real-time processing. Overall, both the proposed NMF and NMCF methods outperformed the next best existing method, by 2.7 dB to 11.6 dB on the artificial dataset and by 0.40 to 1.12 in signal quality improvement on the real-world dataset. The median processing time for separating a 10 s recording was found to be 28.3 s for NMCF and 342 ms for NMF. Given their stable and robust performance, we believe the proposed methods can be used to denoise neonatal heart and lung sounds in real-world environments. Code for the proposed and existing methods is available at: https://github.com/egrooby-monash/heart-and-lung-sound-separation.
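As a rough illustration of the separation principle only (not the authors' NMF/NMCF implementation), the sketch below factorises a toy magnitude "spectrogram" with multiplicative-update NMF and rebuilds each source with a Wiener-style soft mask; all dimensions and data here are made up:

```python
import numpy as np

def nmf(V, rank, n_iter=300, eps=1e-9, seed=0):
    """Basic multiplicative-update NMF: V ~ W @ H with all factors non-negative."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy "magnitude spectrogram": two spectrally distinct sources mixed together.
rng = np.random.default_rng(1)
s1 = np.outer([1.0, 0.8, 0.1, 0.0], rng.random(50))   # low-frequency source
s2 = np.outer([0.0, 0.1, 0.9, 1.0], rng.random(50))   # high-frequency source
V = s1 + s2

W, H = nmf(V, rank=2)
est = [np.outer(W[:, k], H[k]) for k in range(2)]     # per-component estimates
total = est[0] + est[1] + 1e-9
sep = [V * (e / total) for e in est]                  # Wiener-style soft masks

recon_err = np.linalg.norm(V - (est[0] + est[1])) / np.linalg.norm(V)
print(round(recon_err, 3))
```

In a real pipeline the masks would be applied to the complex STFT before inverting back to a waveform; here we only verify that the factorisation reconstructs the mixture.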
The ability to convert reciprocating, i.e., alternating, actuation into rotary motion using linkages is hindered fundamentally by their poor torque transmission capability around kinematic singularity configurations. Here, we harness the elastic potential energy of a linear spring attached to the coupler link of four-bar mechanisms to manipulate force transmission around the kinematic singularities. We developed a theoretical model to explore the parameter space for proper force transmission in slider-crank and rocker-crank four-bar kinematics. Finally, we verified the proposed model and methodology by building and testing a macro-scale prototype of a slider-crank mechanism. We expect this approach to enable the development of small-scale rotary engines and robotic devices with closed kinematic chains dealing with serial kinematic singularities, such as linkages and parallel manipulators.
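To see why dead-center configurations obstruct torque transmission, a minimal numerical sketch (with assumed crank length r = 1 and coupler length l = 3, not the prototype's dimensions) computes the slider-crank transmission function dx/dθ, which vanishes at exactly the kinematic singularities the coupler spring is meant to carry the mechanism through:

```python
import numpy as np

def slider_position(theta, r=1.0, l=3.0):
    """Slider displacement of a slider-crank: crank length r, coupler length l."""
    return r * np.cos(theta) + np.sqrt(l**2 - (r * np.sin(theta))**2)

theta = np.linspace(0, 2 * np.pi, 2001)
x = slider_position(theta)
dx_dtheta = np.gradient(x, theta)   # velocity ratio (transmission function)

# At the dead-center configurations (theta = 0 and pi) the velocity ratio
# vanishes, so a force on the slider produces no crank torque: this is the
# kinematic singularity the spring-assisted design works around.
print(round(abs(dx_dtheta[0]), 4), round(float(np.abs(dx_dtheta).max()), 2))
```

A spring attached to the coupler stores elastic energy away from these configurations and releases it near them, which is the manipulation of force transmission the abstract describes.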
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Synergetic use of sensors for soil moisture retrieval is attracting considerable interest due to the different advantages of different sensors. Integrating active, passive, and optical data could be a comprehensive solution for exploiting the advantages of different sensors when preparing soil moisture maps. Typically, pixel-based methods are used for multi-sensor fusion. Since different applications need soil moisture maps at different scales, pixel-based approaches are limited for this purpose. Object-based image analysis, which employs image objects instead of pixels, could help meet this need. This paper proposes a segment-based image fusion framework to evaluate the possibility of preparing a multi-scale soil moisture map by integrating Sentinel-1, Sentinel-2, and Soil Moisture Active Passive (SMAP) data. The results confirmed that the proposed methodology was able to improve soil moisture estimation at different scales by up to 20% compared to the pixel-based fusion approach.
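A minimal sketch of the segment-based fusion idea (not the paper's exact framework; the label map, sensor values, and equal weights are all assumptions): each image object receives a single fused value computed from the per-sensor segment means, instead of fusing pixel by pixel:

```python
import numpy as np

def segment_fuse(label_map, rasters, weights):
    """Fuse co-registered soil-moisture rasters at the object (segment) level:
    each segment gets one weighted average of the per-sensor segment means."""
    fused = np.zeros_like(label_map, dtype=float)
    for seg in np.unique(label_map):
        mask = label_map == seg
        seg_means = [r[mask].mean() for r in rasters]
        fused[mask] = np.average(seg_means, weights=weights)
    return fused

# Toy 4x4 scene with two segments and two "sensors" (hypothetical values).
labels = np.array([[0, 0, 1, 1]] * 4)
s1 = np.where(labels == 0, 0.20, 0.40)   # e.g., radar-derived moisture
s2 = np.where(labels == 0, 0.30, 0.50)   # e.g., optical-derived moisture
fused = segment_fuse(labels, [s1, s2], weights=[0.5, 0.5])
print(np.unique(fused))
```

Changing the segmentation granularity of `label_map` is what makes the output multi-scale: coarser segments yield coarser soil-moisture maps from the same inputs.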
Planting vegetation is one of the practical solutions for reducing the sediment transfer rate. Increased vegetation cover decreases environmental pollution and the sediment transport rate (STR). Because sediment and vegetation interact in complex ways, predicting the sediment transport rate is challenging. This study aimed to predict the sediment transport rate under vegetation cover using new and optimized versions of the group method of data handling (GMDH). In addition, this study introduces a new ensemble model for predicting the sediment transport rate. Model inputs include wave height, wave velocity, density of cover, wave force, D50, height of the vegetation cover, and cover stem diameter. An independent GMDH model and optimized GMDH models, namely GMDH with the honey badger algorithm (GMDH-HBA), GMDH with the rat swarm optimization algorithm (GMDH-RSOA), GMDH with the sine cosine algorithm (GMDH-SCA), and GMDH with particle swarm optimization (GMDH-PSO), were used to predict sediment transport rates. As a next step, the outputs of the independent GMDH models were used to construct the ensemble model. The MAE of the ensemble model was 0.145 m3/s, while the MAEs of GMDH-HBA, GMDH-RSOA, GMDH-SCA, GMDH-PSO, and GMDH at the testing level were 0.176 m3/s, 0.312 m3/s, 0.367 m3/s, 0.498 m3/s, and 0.612 m3/s, respectively. The Nash-Sutcliffe efficiency (NSE) values of the ensemble model, GMDH-HBA, GMDH-RSOA, GMDH-SCA, GMDH-PSO, and GMDH were 0.95, 0.93, 0.89, 0.86, 0.82, and 0.76, respectively. Furthermore, this study showed that vegetation cover decreased the sediment transport rate by 90 percent. The results indicated that the ensemble and GMDH-HBA models could accurately predict the sediment transport rate. Based on the results of this study, the sediment transport rate can be monitored using the IMM and GMDH-HBA. These results are useful for managing and planning water resources in large basins.
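The two error measures reported above can be computed as follows; the observed and simulated values here are hypothetical, not the study's data:

```python
import numpy as np

def mae(obs, sim):
    """Mean absolute error (same units as the data, here m3/s)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.mean(np.abs(obs - sim))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model is
    no better than predicting the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical observed vs. predicted sediment transport rates (m3/s).
obs = [0.8, 1.2, 0.5, 1.9, 1.1]
sim = [0.9, 1.1, 0.6, 1.7, 1.2]
print(round(mae(obs, sim), 3), round(nse(obs, sim), 3))
```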
This work characterizes the effect of depth on the optimization landscape of linear regression, showing that, despite their non-convexity, deeper models have more desirable optimization landscapes. We consider a robust and over-parameterized setting where a subset of measurements is grossly corrupted with noise and the true linear model is captured via an $N$-layer linear neural network. On the negative side, we show that this problem does not have a benign landscape: given any $N \geq 1$, with constant probability there exists a solution corresponding to the ground truth that is neither a local nor a global minimum. On the positive side, however, we prove that, for any $N$-layer model with $N \geq 2$, a simple sub-gradient method becomes oblivious to such "problematic" solutions; instead, it converges to a balanced solution that is not only close to the ground truth but also enjoys a flat local landscape, thereby eschewing the need for "early stopping". Lastly, we empirically verify that the desirable optimization landscape of deeper models extends to other robust learning tasks, including deep matrix recovery and deep ReLU networks with $\ell_1$-loss.
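A toy version of this setting can be sketched as follows: an assumed 2-layer linear network trained with an $\ell_1$-loss subgradient method on measurements where 20% are grossly corrupted. The dimensions, initialization, and step size are illustrative choices, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 5, 100                                # parameters and measurements
theta_star = rng.normal(size=d)              # ground-truth linear model
X = rng.normal(size=(m, d))
y = X @ theta_star
bad = rng.choice(m, size=m // 5, replace=False)
y[bad] += rng.normal(scale=10.0, size=bad.size)   # grossly corrupt 20%

# 2-layer linear network f(x) = (w2 @ W1) x, trained by an l1-loss subgradient.
W1, w2 = np.eye(d), np.zeros(d)
lr = 0.01
loss = lambda th: np.mean(np.abs(X @ th - y))
loss_init = loss(w2 @ W1)
for _ in range(10_000):
    theta = w2 @ W1                          # end-to-end linear map
    g = X.T @ np.sign(X @ theta - y) / m     # subgradient of the l1 loss
    gW1, gw2 = np.outer(w2, g), W1 @ g       # chain rule through both layers
    W1 -= lr * gW1
    w2 -= lr * gw2

err = np.linalg.norm(w2 @ W1 - theta_star) / np.linalg.norm(theta_star)
print(round(err, 3))
```

Because the clean equations are consistent, the $\ell_1$ minimizer coincides with the ground truth, and the subgradient iterates approach it despite the corrupted measurements.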
In this paper, we study the problem of inferring spatially-varying Gaussian Markov random fields (SV-GMRFs), where the goal is to learn sparse, context-specific GMRFs that represent network relationships between genes. An important application of SV-GMRFs is the inference of gene regulatory networks from spatially resolved transcriptomics datasets. Current work on SV-GMRF inference is based on regularized maximum likelihood estimation (MLE) and suffers from overwhelming computational cost due to its highly nonlinear nature. To alleviate this challenge, we propose a simple and efficient optimization problem in lieu of the MLE that comes equipped with strong statistical and computational guarantees. Our proposed optimization problem is extremely efficient in practice: we can solve instances of SV-GMRFs with more than 2 million variables in less than 2 minutes. We apply the developed framework to study how gene regulatory networks in glioblastoma are spatially rewired within tissue, and identify the prominent activity of the transcription factor HES4, as well as of ribosomal proteins, as characterizing the gene expression networks in the tumor's perivascular niche, which is known to harbor treatment-resistant stem cells.
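As a generic illustration only (not the authors' estimator), the sketch below recovers a sparse precision matrix, i.e. the structure of a GMRF for a single context, by inverting a ridge-regularized sample covariance and soft-thresholding the off-diagonal entries; the chain-structured network and all constants are assumptions:

```python
import numpy as np

def sparse_precision(samples, ridge=0.1, thresh=0.2):
    """Crude sparse precision estimate: invert a ridge-regularized sample
    covariance, then soft-threshold the off-diagonal entries."""
    S = np.cov(samples, rowvar=False)
    P = np.linalg.inv(S + ridge * np.eye(S.shape[0]))
    off = P - np.diag(np.diag(P))
    off = np.sign(off) * np.maximum(np.abs(off) - thresh, 0.0)
    return np.diag(np.diag(P)) + off

# Hypothetical chain-structured network: gene i interacts only with gene i+1.
rng = np.random.default_rng(0)
d = 5
prec_true = np.eye(d) + 0.4 * (np.eye(d, k=1) + np.eye(d, k=-1))
cov_true = np.linalg.inv(prec_true)
samples = rng.multivariate_normal(np.zeros(d), cov_true, size=4000)

P_hat = sparse_precision(samples)
support = np.abs(P_hat) > 1e-12          # recovered network structure
print(support.astype(int))
```

In an SV-GMRF, one such precision matrix would be estimated per spatial context, with the sparsity pattern allowed to vary smoothly across contexts.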
Breast cancer is a common and deadly disease, but it is often curable when diagnosed early. While most countries have large-scale screening programs, there is no consensus on a single globally accepted breast cancer screening policy. The complexity of the disease; the limited availability of screening methods such as mammography, magnetic resonance imaging (MRI), and ultrasound screening; and public health policies all factor into the design of screening policies. Resource availability concerns require designing policies that fit a budget, a problem that can be modeled as a constrained partially observable Markov decision process (CPOMDP). In this study, we propose a multi-objective CPOMDP model for breast cancer screening with two objectives: minimizing the lifetime risk of dying from breast cancer and maximizing quality-adjusted life years (QALYs). Additionally, we consider an expanded action space that allows screening methods beyond mammography. Each action has a unique impact on QALYs and lifetime risk, as well as a unique cost. Our results reveal the Pareto frontier of optimal solutions for average-risk and high-risk patients at different budget levels, which decision-makers can use to set screening policies in practice.
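The Pareto frontier over the two objectives can be extracted from candidate policy outcomes as sketched below; the (lifetime risk, QALY) pairs are invented for illustration and are not the study's results:

```python
def pareto_front(points):
    """Pareto-efficient points when minimizing lifetime risk and maximizing
    QALYs. Each point is a (lifetime_risk, qalys) pair."""
    front = []
    for p in points:
        # p is dominated if some other policy is at least as good on both
        # objectives (lower or equal risk, higher or equal QALYs).
        dominated = any(q[0] <= p[0] and q[1] >= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return sorted(front)

# Hypothetical (risk, QALY) outcomes for candidate screening policies.
policies = [(0.030, 40.1), (0.025, 39.8), (0.020, 39.2),
            (0.028, 39.0), (0.020, 39.5), (0.035, 40.0)]
print(pareto_front(policies))
```

Each surviving point corresponds to a policy for which no other policy improves one objective without worsening the other, which is exactly the trade-off curve a decision-maker would inspect per budget level.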
We consider minimizing $f(X) = \phi(XX^{T})$ over an $n \times r$ factor matrix $X$ using gradient descent, where $\phi$ is an underlying smooth convex cost function defined over $n \times n$ matrices. While only a second-order stationary point $X$ can be found in reasonable time, if $X$ is additionally rank deficient, then its rank deficiency certifies it as being globally optimal. This way of certifying global optimality necessarily requires the search rank $r$ of the current iterate $X$ to be overparameterized with respect to the rank $r^{\star}$ of the global minimizer. Unfortunately, overparameterization significantly slows down the convergence of gradient descent, from a linear rate with $r = r^{\star}$ to a sublinear rate when $r > r^{\star}$, even when $\phi$ is strongly convex. In this paper, we propose an inexpensive preconditioner that restores the convergence rate of gradient descent back to linear in the overparameterized case, while also making it agnostic to possible ill-conditioning in the global minimizer $X^{\star}$.
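One member of this family of preconditioners (an assumed form for illustration, not necessarily the paper's exact choice) right-multiplies the gradient by $(X^{T}X + \eta I)^{-1}$, with the damping $\eta$ tied to the distance from optimality. A sketch on a toy matrix-recovery instance with search rank $r = 3$ overparameterizing $r^{\star} = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r_star, r = 8, 1, 3                   # search rank r overparameterizes r*
Z = rng.normal(size=(n, r_star))
M_star = Z @ Z.T                         # rank-1 ground truth

def loss(X):                             # phi(XX^T) = 0.5 * ||XX^T - M*||_F^2
    return 0.5 * np.linalg.norm(X @ X.T - M_star) ** 2

X = 0.1 * rng.normal(size=(n, r))
alpha = 0.1
for _ in range(1000):
    if loss(X) < 1e-12:
        break
    G = 2 * (X @ X.T - M_star) @ X       # Euclidean gradient in X
    eta = np.sqrt(loss(X))               # damping shrinks with the residual
    X -= alpha * G @ np.linalg.inv(X.T @ X + eta * np.eye(r))

print(f"{loss(X):.1e}")
```

Plain gradient descent on the same overparameterized instance stalls at a sublinear rate; the right-preconditioner rescales the flat directions introduced by the excess rank, which is the effect the abstract describes.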
Deep learning (DL) based applications are becoming increasingly popular and are advancing at an unprecedented pace. While many research efforts are being undertaken to enhance deep neural networks (DNNs), the core of DL applications, the practical deployment challenges of these applications in cloud and edge systems, and their impact on application usability, have not been sufficiently investigated. In particular, the impact of deploying the different virtualization platforms offered by the cloud and the edge on the usability of DL applications (in terms of end-to-end (E2E) inference time) remains an open question. Importantly, resource elasticity (via scale-up), CPU pinning, and processor type (CPU vs. GPU) configurations have been shown to influence virtualization overhead. Accordingly, the goal of this research is to study the impact of these potentially decisive deployment options on the E2E performance, and thereby the usability, of DL applications. To that end, we measure the impact of four popular execution platforms (namely, bare-metal, virtual machine (VM), container, and container in VM) while varying the processor configuration (scale-up, CPU pinning) and processor type. This study reveals a set of interesting, and sometimes counter-intuitive, findings that can serve as best practices for cloud solution architects to efficiently deploy DL applications in various systems. A notable finding is that solution architects must be aware of the DL application characteristics, particularly their preprocessing and postprocessing requirements, to be able to optimally choose and configure an execution platform, determine the use of a GPU, and decide the efficient scale-up range.
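A minimal harness for decomposing E2E inference time into preprocessing, inference, and postprocessing stages might look like the sketch below; the stages are trivial placeholders, since the study's workloads and platforms are not reproduced here:

```python
import time
from statistics import median

def time_stage(fn, arg, repeats=20):
    """Median wall-clock time of one pipeline stage, in milliseconds."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        out = fn(arg)
        samples.append((time.perf_counter() - t0) * 1e3)
    return median(samples), out

# Stand-ins for a DL pipeline's stages (assumptions, not a real model):
preprocess  = lambda x: [v / 255.0 for v in x]   # e.g., image normalization
infer       = lambda x: sum(x)                   # placeholder for model forward
postprocess = lambda y: round(y, 3)              # e.g., decoding the output

raw = list(range(10_000))
t_pre, x = time_stage(preprocess, raw)
t_inf, y = time_stage(infer, x)
t_post, _ = time_stage(postprocess, y)
e2e = t_pre + t_inf + t_post
print({"pre_ms": round(t_pre, 3), "infer_ms": round(t_inf, 3),
       "post_ms": round(t_post, 3), "e2e_ms": round(e2e, 3)})
```

Running such a harness on each execution platform (bare-metal, VM, container, container in VM) under each processor configuration is what surfaces the per-stage breakdown the study's recommendations rely on.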